Associating Healthcare Teamwork with Patient Outcomes for Predictive Analysis

Lu, Hsiao-Ying, Ma, Kwan-Liu

arXiv.org Artificial Intelligence

Cancer treatment outcomes are influenced not only by clinical and demographic factors but also by the collaboration of healthcare teams. However, prior work has largely overlooked the potential role of human collaboration in shaping patient survival. This paper presents an applied AI approach to uncovering the impact of healthcare professionals' (HCPs) collaboration--captured through electronic health record (EHR) systems--on cancer patient outcomes. We model EHR-mediated HCP interactions as networks and apply machine learning techniques to detect predictive signals of patient survival embedded in these collaborations. Our models are cross-validated to ensure generalizability, and we explain the predictions by identifying key network traits associated with improved outcomes. Importantly, clinical experts and the literature validate the relevance of the identified collaboration traits, reinforcing their potential for real-world applications. This work contributes a practical workflow for leveraging digital traces of collaboration and AI to assess and improve team-based healthcare. The approach is potentially transferable to other domains involving complex collaboration and offers actionable insights to support data-informed interventions in healthcare delivery.
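The abstract does not specify which network traits the authors extract, but the general recipe — turn co-activity records into an undirected graph, then compute structural features per team — can be sketched with standard-library Python. The interaction pairs and the two traits below are illustrative assumptions, not the paper's actual feature set.

```python
from collections import defaultdict

# Hypothetical EHR-derived interaction records: pairs of HCPs who
# acted on the same patient's chart (illustrative, not from the paper).
interactions = [
    ("nurse_a", "dr_b"), ("dr_b", "pharm_c"),
    ("nurse_a", "pharm_c"), ("dr_b", "dr_d"),
]

# Build an undirected collaboration network as an adjacency map.
adj = defaultdict(set)
for u, v in interactions:
    adj[u].add(v)
    adj[v].add(u)

n = len(adj)                                   # number of HCPs
m = sum(len(nb) for nb in adj.values()) // 2   # number of edges

# Example network traits that could serve as per-team inputs to a
# survival model (the paper's actual features may differ).
features = {
    "density": 2 * m / (n * (n - 1)),
    "max_degree": max(len(nb) for nb in adj.values()),
}
```

Feature dictionaries like this, one per care team, would then be fed to a cross-validated classifier alongside clinical and demographic covariates.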


Building a Silver-Standard Dataset from NICE Guidelines for Clinical LLMs

Ding, Qing, Zhang, Eric Hua Qing, Jozsa, Felix, Ive, Julia

arXiv.org Artificial Intelligence

Large language models (LLMs) are increasingly used in healthcare, yet standardised benchmarks for evaluating guideline-based clinical reasoning are missing. This study introduces a validated dataset derived from publicly available guidelines across multiple diagnoses. The dataset was created with the help of GPT and contains realistic patient scenarios, as well as clinical questions. We benchmark a range of recent popular LLMs to showcase the validity of our dataset. The framework supports systematic evaluation of LLMs' clinical utility and guideline adherence.


Adoption, usability and perceived clinical value of a UK AI clinical reference platform (iatroX): a mixed-methods formative evaluation of real-world usage and a 1,223-respondent user survey

Tytler, Kolawole

arXiv.org Artificial Intelligence

Clinicians face growing information overload from biomedical literature and guidelines, hindering evidence-based care. Retrieval-augmented generation (RAG) with large language models may provide fast, provenance-linked answers, but requires real-world evaluation. We describe iatroX, a UK-centred RAG-based clinical reference platform, and report early adoption, usability, and perceived clinical value from a formative implementation evaluation. Methods comprised a retrospective analysis of usage across web, iOS, and Android over 16 weeks (8 April-31 July 2025) and an in-product intercept survey. Usage metrics were drawn from web and app analytics with bot filtering. A client-side script randomized single-item prompts, drawn from a predefined battery assessing usefulness, reliability, and adoption intent, to approx. 10% of web sessions. Proportions were summarized with Wilson 95% confidence intervals; free-text comments underwent thematic content analysis. iatroX reached 19,269 unique web users, 202,660 engagement events, and approx. 40,000 clinical queries. Mobile uptake included 1,960 iOS downloads and Android growth (peak >750 daily active users). The survey yielded 1,223 item-level responses: perceived usefulness 86.2% (95% CI 74.8-93.9%; 50/58); would use again 93.3% (95% CI 68.1-99.8%; 14/15); recommend to a colleague 88.4% (95% CI 75.1-95.9%; 38/43); perceived accuracy 75.0% (95% CI 58.8-87.3%; 30/40); reliability 79.4% (95% CI 62.1-91.3%; 27/34). Themes highlighted speed, guideline-linked answers, and UK specificity. Early real-world use suggests iatroX can mitigate information overload and support timely answers for UK clinicians. Limitations include small per-item samples and early-adopter bias; future work will include accuracy audits and prospective studies on workflow and care quality.
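The Wilson score interval used to summarize the survey proportions is easy to reproduce. A minimal sketch, applied to the "perceived usefulness" item (50 positive of 58 responses); note the abstract's reported bounds may come from a different implementation or interval variant, so the numbers here are illustrative of the method, not a reproduction of the paper's figures.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# "Perceived usefulness": 50 positive of 58 item-level responses.
lo, hi = wilson_ci(50, 58)
```

Unlike the naive Wald interval, the Wilson interval stays within [0, 1] and behaves sensibly at the small per-item sample sizes the authors flag as a limitation.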


Facilitating the Emergence of Assistive Robots to Support Frailty: Psychosocial and Environmental Realities

Higgins, Angela, Potter, Stephen, Dragone, Mauro, Hawley, Mark, Amirabdollahian, Farshid, Di Nuovo, Alessandro, Caleb-Solly, Praminda

arXiv.org Artificial Intelligence

While assistive robots have much potential to help older people with frailty-related needs, there are few in use. There is a gap between what is developed in laboratories and what would be viable in real-world contexts. Through a series of co-design workshops (61 participants across 7 sessions) including those with lived experience of frailty, their carers, and healthcare professionals, we gained a deeper understanding of everyday issues concerning the place of new technologies in their lives. A persona-based approach surfaced emotional, social, and psychological issues. Any assistive solution must be developed in the context of this complex interplay of psychosocial and environmental factors. Our findings, presented as design requirements in direct relation to frailty, can help promote design thinking that addresses people's needs in a more pragmatic way to move assistive robotics closer to real-world use.


From Staff Messages to Actionable Insights: A Multi-Stage LLM Classification Framework for Healthcare Analytics

Sakai, Hajar, Tseng, Yi-En, Mikaeili, Mohammadsadegh, Bosire, Joshua, Jovin, Franziska

arXiv.org Artificial Intelligence

Hospital call centers serve as the primary contact point for patients within a hospital system. They also generate substantial volumes of staff messages as navigators process patient requests and communicate with hospital offices under established protocol restrictions and guidelines. This continuously accumulating body of text data can be mined for insights; however, traditional supervised learning approaches require annotated data, extensive training, and model tuning. Large Language Models (LLMs) offer a paradigm shift toward more computationally efficient methodologies for healthcare analytics. This paper presents a multi-stage LLM-based framework that identifies staff message topics and classifies messages by their reasons in a multi-class fashion. In the process, multiple LLM types, including reasoning, general-purpose, and lightweight models, were evaluated. The best-performing model was o3, achieving 78.4% weighted F1-score and 79.2% accuracy, followed closely by gpt-5 (75.3% weighted F1-score and 76.2% accuracy). The proposed methodology incorporates data security measures and HIPAA compliance requirements essential for healthcare environments. The processed LLM outputs are integrated into a visualization decision support tool that transforms the staff messages into actionable insights accessible to healthcare professionals. This approach enables more efficient utilization of the collected staff messaging data, identifies navigator training opportunities, and supports improved patient experience and care quality.
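The weighted F1-score used to rank the models averages per-class F1 weighted by class support, so frequent message reasons count proportionally more. A minimal stdlib sketch; the label names and toy predictions below are invented for illustration, as the abstract does not list the actual message-reason classes.

```python
from collections import Counter

# Toy gold labels and hypothetical LLM-predicted message reasons
# (illustrative only; not the paper's label set or data).
gold = ["refill", "scheduling", "refill", "billing", "scheduling", "refill"]
pred = ["refill", "scheduling", "billing", "billing", "refill", "refill"]

def weighted_f1(gold, pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(gold)
    total = 0.0
    for label, n_true in support.items():
        tp = sum(g == p == label for g, p in zip(gold, pred))
        fp = sum(p == label and g != label for g, p in zip(gold, pred))
        fn = n_true - tp
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += f1 * n_true / len(gold)
    return total

score = weighted_f1(gold, pred)
```

This is the same quantity scikit-learn reports as `f1_score(..., average="weighted")`, which is the conventional choice when class frequencies are imbalanced, as call-center message reasons typically are.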


PARROT: An Open Multilingual Radiology Reports Dataset

Guellec, Bastien Le, Adambounou, Kokou, Adams, Lisa C, Agripnidis, Thibault, Ahn, Sung Soo, Chalal, Radhia Ait, Antonoli, Tugba Akinci D, Amouyel, Philippe, Andersson, Henrik, Bentegeac, Raphael, Benzoni, Claudio, Blandino, Antonino Andrea, Busch, Felix, Can, Elif, Cau, Riccardo, Cavallo, Armando Ugo, Chavihot, Christelle, Chiquete, Erwin, Cuocolo, Renato, Divjak, Eugen, Ivanac, Gordana, Macek, Barbara Dziadkowiec, Elogne, Armel, Fanni, Salvatore Claudio, Ferrarotti, Carlos, Fossataro, Claudia, Fossataro, Federica, Fulek, Katarzyna, Fulek, Michal, Gac, Pawel, Gachowska, Martyna, Juarez, Ignacio Garcia, Gatti, Marco, Gorelik, Natalia, Goulianou, Alexia Maria, Hamroun, Aghiles, Herinirina, Nicolas, Kraik, Krzysztof, Krupka, Dominik, Holay, Quentin, Kitamura, Felipe, Klontzas, Michail E, Kompanowska, Anna, Kompanowski, Rafal, Lefevre, Alexandre, Lemke, Tristan, Lindholz, Maximilian, Muller, Lukas, Macek, Piotr, Makowski, Marcus, Mannacio, Luigi, Meddeb, Aymen, Natale, Antonio, Edzang, Beatrice Nguema, Ojeda, Adriana, Park, Yae Won, Piccione, Federica, Ponsiglione, Andrea, Poreba, Malgorzata, Poreba, Rafal, Prucker, Philipp, Pruvo, Jean Pierre, Pugliesi, Rosa Alba, Rabemanorintsoa, Feno Hasina, Rafailidis, Vasileios, Resler, Katarzyna, Rotkegel, Jan, Saba, Luca, Siebert, Ezann, Stanzione, Arnaldo, Tekin, Ali Fuat, Yanchapaxi, Liz Toapanta, Triantafyllou, Matthaios, Tsaoulia, Ekaterini, Vassalou, Evangelia, Vernuccio, Federica, Wasselius, Johan, Wang, Weilang, Urban, Szymon, Wlodarczak, Adrian, Wlodarczak, Szymon, Wysocki, Andrzej, Xu, Lina, Zatonski, Tomasz, Zhang, Shuhang, Ziegelmayer, Sebastian, Kuchcinski, Gregory, Bressem, Keno K

arXiv.org Artificial Intelligence

Rationale and Objectives: To develop and validate PARROT (Polyglottal Annotated Radiology Reports for Open Testing), a large, multicentric, open-access dataset of fictional radiology reports spanning multiple languages for testing natural language processing applications in radiology. Materials and Methods: From May to September 2024, radiologists were invited to contribute fictional radiology reports following their standard reporting practices. Contributors provided at least 20 reports with associated metadata including anatomical region, imaging modality, clinical context, and for non-English reports, English translations. All reports were assigned ICD-10 codes. A human vs. AI report differentiation study was conducted with 154 participants (radiologists, healthcare professionals, and non-healthcare professionals) assessing whether reports were human-authored or AI-generated. Results: The dataset comprises 2,658 radiology reports from 76 authors across 21 countries and 13 languages. Reports cover multiple imaging modalities (CT: 36.1%, MRI: 22.8%, radiography: 19.0%, ultrasound: 16.8%) and anatomical regions, with chest (19.9%), abdomen (18.6%), head (17.3%), and pelvis (14.1%) being most prevalent. In the differentiation study, participants achieved 53.9% accuracy (95% CI: 50.7%-57.1%) in distinguishing between human and AI-generated reports, with radiologists performing significantly better (56.9%, 95% CI: 53.3%-60.6%, p<0.05) than other groups. Conclusion: PARROT represents the largest open multilingual radiology report dataset, enabling development and validation of natural language processing applications across linguistic, geographic, and clinical boundaries without privacy constraints.


AI-Enhanced Pediatric Pneumonia Detection: A CNN-Based Approach Using Data Augmentation and Generative Adversarial Networks (GANs)

Manaf, Abdul, Mughal, Nimra

arXiv.org Artificial Intelligence

Pneumonia is a leading cause of mortality in children under five, requiring accurate chest X-ray diagnosis. This study presents a machine learning-based Pediatric Chest Pneumonia Classification System to assist healthcare professionals in diagnosing pneumonia from chest X-ray images. The CNN-based model was trained on 5,863 labeled chest X-ray images from children aged 0-5 years from the Guangzhou Women and Children's Medical Center. To address limited data, we applied augmentation techniques (rotation, zooming, shear, horizontal flipping) and employed GANs to generate synthetic images, addressing class imbalance. The system achieved optimal performance using combined original, augmented, and GAN-generated data, evaluated through accuracy and F1 score metrics. The final model was deployed via a Flask web application, enabling real-time classification with probability estimates. Results demonstrate the potential of deep learning and GANs in improving diagnostic accuracy and efficiency for pediatric pneumonia classification, particularly valuable in resource-limited clinical settings. Code: https://github.com/AbdulManaf12/Pediatric-Chest-Pneumonia-Classification
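The augmentations listed (rotation, zooming, shear, horizontal flipping) each produce extra label-preserving training images from one original. The principle can be shown on a toy pixel grid with plain Python; real pipelines would use an image library (e.g. Keras' ImageDataGenerator with rotation_range, zoom_range, shear_range, and horizontal_flip), and the two transforms below are simplified stand-ins, not the study's exact configuration.

```python
# Toy grayscale "X-ray" as a nested list of pixel intensities.
image = [
    [0, 1, 2],
    [3, 4, 5],
]

def horizontal_flip(img):
    """Mirror each pixel row left-to-right; the diagnosis label is unchanged."""
    return [row[::-1] for row in img]

def rotate_180(img):
    """Rotate the image 180 degrees (a crude stand-in for small rotations)."""
    return [row[::-1] for row in img[::-1]]

# One original image yields three training examples sharing its label.
augmented = [image, horizontal_flip(image), rotate_180(image)]
```

Each transform maps a labeled image to a new labeled image for free, which is why augmentation (together with GAN-generated samples for the minority class) helps when annotated pediatric X-rays are scarce.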


Visual-Conversational Interface for Evidence-Based Explanation of Diabetes Risk Prediction

Samimi, Reza, Bhattacharya, Aditya, Gosak, Lucija, Stiglic, Gregor, Verbert, Katrien

arXiv.org Artificial Intelligence

Healthcare professionals need effective ways to use, understand, and validate AI-driven clinical decision support systems. Existing systems face two key limitations: complex visualizations and a lack of grounding in scientific evidence. We present an integrated decision support system that combines interactive visualizations with a conversational agent to explain diabetes risk assessments. We propose a hybrid prompt handling approach combining fine-tuned language models for analytical queries with general Large Language Models (LLMs) for broader medical questions, a methodology for grounding AI explanations in scientific evidence, and a feature range analysis technique to support deeper understanding of feature contributions. We conducted a mixed-methods study with 30 healthcare professionals and found that the conversational interactions helped healthcare professionals build a clear understanding of model assessments, while the integration of scientific evidence calibrated trust in the system's decisions. Most participants reported that the system supported both patient risk evaluation and recommendation.


La Leaderboard: A Large Language Model Leaderboard for Spanish Varieties and Languages of Spain and Latin America

Grandury, María, Aula-Blasco, Javier, Falcão, Júlia, Fourrier, Clémentine, González, Miguel, Martínez, Gonzalo, Santamaría, Gonzalo, Agerri, Rodrigo, Aldama, Nuria, Chiruzzo, Luis, Conde, Javier, Gómez, Helena, Guerrero, Marta, Ivetta, Guido, López, Natalia, Plaza-del-Arco, Flor Miriam, Martín-Valdivia, María Teresa, Montoro, Helena, Muñoz, Carmen, Reviriego, Pedro, Rosado, Leire, Vaca, Alejandro, Vallecillo-Rodríguez, María Estrella, Vallego, Jorge, Zubiaga, Irune

arXiv.org Artificial Intelligence

Leaderboards showcase the current capabilities and limitations of Large Language Models (LLMs). To motivate the development of LLMs that represent the linguistic and cultural diversity of the Spanish-speaking community, we present La Leaderboard, the first open-source leaderboard to evaluate generative LLMs in languages and language varieties of Spain and Latin America. La Leaderboard is a community-driven project that aims to establish an evaluation standard for everyone interested in developing LLMs for the Spanish-speaking community. This initial version combines 66 datasets in Basque, Catalan, Galician, and different Spanish varieties, showcasing the evaluation results of 50 models. To encourage community-driven development of leaderboards in other languages, we explain our methodology, including guidance on selecting the most suitable evaluation setup for each downstream task. In particular, we provide a rationale for using fewer few-shot examples than typically found in the literature, aiming to reduce environmental impact and facilitate access to reproducible results for a broader research community.


MediTools -- Medical Education Powered by LLMs

Alshatnawi, Amr, Sampaleanu, Remi, Liebovitz, David

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) has been advancing rapidly, and with the advent of large language models (LLMs) in late 2022, numerous opportunities have emerged for adopting this technology across various domains, including medicine. These innovations hold immense potential to modernize medical education. Our research project leverages large language models to enhance medical education and address workflow challenges through the development of MediTools - AI Medical Education. This prototype application focuses on interactive tools that simulate real-life clinical scenarios, provide access to medical literature, and keep users updated with the latest medical news. Our first tool is a dermatology case simulation tool that uses real patient images depicting various dermatological conditions and enables interaction with LLMs acting as virtual patients. This platform allows users to practice their diagnostic skills and enhance their clinical decision-making abilities. The application also features two additional tools: an AI-enhanced PubMed tool for engaging with LLMs to gain deeper insights into research papers, and a Google News tool that offers LLM-generated summaries of articles for various medical specialties. A comprehensive survey was conducted among medical professionals and students to gather initial feedback on the effectiveness and user satisfaction of MediTools, providing insights for further development and refinement of the application. This research demonstrates the potential of AI-driven tools to transform medical education, offering a scalable and interactive platform for continuous learning and skill development.